Conversation
Force-pushed from 5e20d59 to 7f890e2
@Rowno ready for review
@Rowno In the end, something you were worried about has happened: we do have a flaky test. And it's flaky on CI only, which makes it very difficult to debug. As of now I have no idea why it's happening: even though CI is much less powerful than my local machine, the CPU throttling should still slow everything down 4x, so the tests should always pass. I will try to research this a little bit.
@Rowno From this thread I see there's a known problem with CPU throttling of Chrome running inside Docker. I believe it's better to remove this failing test: the quality it provides doesn't outweigh the number of problems it creates.
Yeah, I would remove the test. |
Force-pushed from 76d6b68 to 24e2293
@Rowno ahaha, I've also deleted it in this PR. Updated the PR, ready for review
@Rowno WDYT about this change? |
@Rowno ping |
Adds a special `--ram` flag and an `isRamMeasured` flag to `ReactBenchmark`. If passed, RAM measurement is enabled. In this case, 2 metrics are recorded between the runs:

- `JSHeapUsedSize` represents how much RAM was consumed at the end of a test iteration.
- A count of objects based on `Object.prototype` represents how many objects were created in RAM at the end of a test iteration. Why is it interesting? The JS engine sometimes optimizes code in different ways, which in turn changes its memory footprint. (source / look at the "Counting all the objects" section)

P.S. @Rowno this PR should be merged after #30 to avoid extra conflicts
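As a side note, the "measure RAM between runs" idea can be sketched in a few lines. The benchmark itself presumably reads Chrome's `JSHeapUsedSize` (e.g. Puppeteer's `page.metrics()` exposes it, and `page.queryObjects()` can count live objects); this self-contained analogue uses plain Node's `process.memoryUsage()` instead. `measureHeapDelta` and the workload below are hypothetical names, not part of `ReactBenchmark`:

```javascript
// Minimal analogue of measuring RAM between benchmark iterations, in plain
// Node. Chrome's JSHeapUsedSize is replaced by process.memoryUsage().heapUsed.

function measureHeapDelta(iteration) {
  // Encourage a stable baseline if Node was started with --expose-gc.
  if (global.gc) global.gc();
  const before = process.memoryUsage().heapUsed;
  iteration();
  const after = process.memoryUsage().heapUsed;
  return after - before; // bytes allocated and still retained by the iteration
}

// Example workload: retain 100,000 small objects so the heap visibly grows.
const retained = [];
const delta = measureHeapDelta(() => {
  for (let i = 0; i < 100000; i++) retained.push({ i });
});

console.log(`heap grew by ~${(delta / 1024 / 1024).toFixed(2)} MiB`);
```

Recording the delta per iteration rather than the absolute heap size is what makes engine-level differences in memory footprint visible between runs.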